Consider modelling house prices as a linear function of size:
\[\begin{equation} H_i = a + b S_i + \epsilon_i \end{equation}\]
where \(H_i\) is the house price; \(S_i\) is the size in square feet; and \(\epsilon_i\) is an error term.
Choose a model to minimise the mean of the squared errors, that is:
\[\begin{equation} L = \frac{1}{K} \sum_{i=1}^{K} \epsilon_i^2 \end{equation}\]
which is the same as choosing values of \(a\) and \(b\) to minimise:
\[\begin{equation} L = \frac{1}{K} \sum_{i=1}^{K} (H_i - (a + b S_i))^2 \end{equation}\]
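As a concrete illustration, here is a minimal Python sketch of this loss; the function name and toy data are my own, not part of the notes:

```python
import numpy as np

def mse_loss(a, b, S, H):
    """Mean squared error of the line a + b*S against observed prices H."""
    residuals = H - (a + b * S)
    return np.mean(residuals ** 2)

# Hypothetical data: sizes in square feet, prices in $1000s
S = np.array([1000.0, 1500.0, 2000.0, 2500.0])
H = np.array([200.0, 280.0, 370.0, 445.0])

print(mse_loss(a=30.0, b=0.17, S=S, H=H))
```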
Other loss functions are possible. Least absolute deviation loss:
\[\begin{equation} L = \frac{1}{K} \sum_{i=1}^{K} |\epsilon_i| \end{equation}\]
Quartic power loss:
\[\begin{equation} L = \frac{1}{K} \sum_{i=1}^{K} \epsilon_i^4 \end{equation}\]
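These alternatives differ only in how residuals are penalised; a sketch, reusing the hypothetical arrays above:

```python
import numpy as np

def mae_loss(a, b, S, H):
    """Least absolute deviation: penalises residuals linearly."""
    return np.mean(np.abs(H - (a + b * S)))

def quartic_loss(a, b, S, H):
    """Quartic power loss: penalises large residuals very heavily."""
    return np.mean((H - (a + b * S)) ** 4)
```

Relative to squared error, absolute deviation is less sensitive to outliers, while the quartic loss is far more sensitive to them.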
Suppose we want to minimise the least-squares loss function:
\[\begin{equation} L = \frac{1}{K} \sum_{i=1}^{K} (H_i - (a + b S_i))^2 \end{equation}\]
Choose \(a\) and \(b\) to minimise this loss \(\implies\) differentiate! We determine \(\hat{a}\) and \(\hat{b}\) as the values minimising \(L\):
\[\begin{align} \frac{\partial L}{\partial a} &= -\frac{2}{K}\sum_{i=1}^{K} (H_i - (a + b S_i)) = 0\\ \frac{\partial L}{\partial b} &= -\frac{2}{K}\sum_{i=1}^{K} S_i (H_i - (a + b S_i)) = 0 \end{align}\]
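Solving these two equations simultaneously gives the familiar closed-form least-squares estimates:
\[\begin{align} \hat{b} &= \frac{\sum_{i=1}^{K} (S_i - \bar{S})(H_i - \bar{H})}{\sum_{i=1}^{K} (S_i - \bar{S})^2}\\ \hat{a} &= \bar{H} - \hat{b} \bar{S} \end{align}\]
where \(\bar{S}\) and \(\bar{H}\) denote the sample means.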
For general loss functions, however, no such closed-form solution exists. That is, usually equations like:
\[\begin{align} -\frac{2}{K}\sum_{i=1}^{K} (H_i - (a + b S_i)) &= 0\\ -\frac{2}{K}\sum_{i=1}^{K} S_i (H_i - (a + b S_i)) &= 0 \end{align}\]
have no analytic solution. (Here, they actually do.)
Instead of solving the equations directly, we use gradient descent optimisation, repeatedly applying the updates:
\[\begin{align} a &= a - \eta \frac{\partial L}{\partial a}\\ b &= b - \eta \frac{\partial L}{\partial b} \end{align}\]
until \(a\) and \(b\) no longer change, where \(\eta > 0\) is the learning rate.
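A minimal gradient-descent sketch in Python; the learning rate and iteration count are illustrative choices, not values prescribed by the notes:

```python
import numpy as np

def fit_line(S, H, eta=1e-7, n_iter=100_000):
    """Minimise the mean squared error of H ~ a + b*S by gradient descent."""
    a, b = 0.0, 0.0
    K = len(S)
    for _ in range(n_iter):
        residuals = H - (a + b * S)
        grad_a = -2.0 / K * np.sum(residuals)      # dL/da
        grad_b = -2.0 / K * np.sum(S * residuals)  # dL/db
        a -= eta * grad_a
        b -= eta * grad_b
    return a, b

# e.g. fit_line(np.array([1000., 1500., 2000.]), np.array([200., 280., 370.]))
```

With raw square-foot values the gradients with respect to \(b\) are large, so in practice one either uses a very small \(\eta\), as here, or standardises \(S\) first.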
We can make the model more flexible by adding a quadratic term:
\[\begin{equation} H_i = a + b S_i + c S_i^2 + \epsilon_i \end{equation}\]
What does this model look like?
Least-squares loss function:
\[\begin{equation} L = \frac{1}{K} \sum_{i=1}^{K} (H_i - (a + b S_i + c S_i^2))^2 \end{equation}\]
and gradient descent now updates all three parameters:
\[\begin{align} a &= a - \eta \frac{\partial L}{\partial a}\\ b &= b - \eta \frac{\partial L}{\partial b}\\ c &= c - \eta \frac{\partial L}{\partial c} \end{align}\]
In principle, we could keep adding higher powers:
\[\begin{equation} H_i = a + b S_i + c S_i^2 + d S_i^3 + \dots + \epsilon_i \end{equation}\]
To avoid rewarding ever more flexible models, hold out a separate validation set to test model predictions on, as in the sketch below.
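A minimal sketch of this idea, assuming numpy; the data-generating process, split, and degree range are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: sizes (1000s of sq ft) and noisy linear prices
S = rng.uniform(1.0, 3.0, size=100)
H = 50 + 120 * S + rng.normal(0, 10, size=100)

# Hold out 20% of the data as a validation set
idx = rng.permutation(len(S))
train, valid = idx[:80], idx[80:]

for degree in range(1, 6):
    coeffs = np.polyfit(S[train], H[train], degree)  # fit on training data
    pred = np.polyval(coeffs, S[valid])              # predict on held-out data
    mse = np.mean((H[valid] - pred) ** 2)
    print(f"degree {degree}: validation MSE = {mse:.2f}")
```

Validation error typically stops improving, and eventually worsens, as the degree grows beyond what the data support, flagging overfitting.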
What if the outcome is binary, for example the presence or absence of lung cancer? A linear model is no longer appropriate. Denote the outcome for individual \(i\) by \(y_i \in \{0, 1\}\).
Two outcomes: so it is like flipping a coin!
Denote \(\theta = \text{Pr}(y_i = 1)\), where \(0\leq \theta \leq 1\). We can represent this with the following probability distribution:
\[\text{Pr}(y_i=y) = \theta^y(1-\theta)^{1-y} \]
which is known as the Bernoulli distribution:
\[\begin{equation} y_i \sim \text{Bernoulli}(\theta) \end{equation}\]
Suppose you flip a coin twice and obtain \(y_1=1\), \(y_2=0\). Assuming independence:
\[\begin{equation} \text{Pr}(y_1=1,y_2=0|\theta) = \theta \times (1-\theta). \end{equation}\]
We call \(L(\theta)=\text{Pr}(y_1=1,y_2=0|\theta)\) a likelihood.
We want to choose \(\theta\) to maximise the probability of obtaining these results.
Find the maximum by differentiating \(L(\theta) = \theta(1-\theta) = \theta - \theta^2\):
\[\begin{equation} \frac{d L}{d\theta} = 1 - 2 \theta = 0 \end{equation}\]
Rearranging, we obtain the maximum likelihood estimate:
\[\begin{equation} \hat{\theta} = \frac{1}{2} \end{equation}\]
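A quick numerical check of this maximum; the grid is my own choice:

```python
import numpy as np

theta = np.linspace(0, 1, 1001)
likelihood = theta * (1 - theta)     # Pr(y1=1, y2=0 | theta)
print(theta[np.argmax(likelihood)])  # 0.5, matching the analytic result
```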
In logistic regression, we use the logistic function:
\[\begin{equation} \theta = \frac{1}{1 + \exp (-x)} \end{equation}\]
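The logistic function squashes any real number into \((0,1)\); a quick sketch:

```python
import numpy as np

def logistic(x):
    """Map any real x to a probability strictly between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-x))

print(logistic(np.array([-5.0, 0.0, 5.0])))  # ~[0.0067, 0.5, 0.9933]
```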
We want to estimate how sensitive the presence or absence of lung cancer is to tar, so we model the probability:
\[\begin{equation} \theta_i = f_\beta(x_i) := \frac{1}{1 + \exp (-(\beta_0 + \beta_1 x_i))} \end{equation}\]
which is known as logistic regression, and we assume:
\[\begin{equation} y_i \sim \text{Bernoulli}(\theta_i) \end{equation}\]
The data for one individual, \((x_i, y_i)\), have probability:
\[\text{Pr}(y_i=y) = f_\beta(x_i)^y(1-f_\beta(x_i))^{1-y} \]
Suppose we have data \((x_1,y_1=1)\) and \((x_2,y_2=0)\).
Assume the data are independent and identically distributed (i.i.d.).
Then the overall probability is just the product of the individual probabilities:
\[\begin{align} L &= f_\beta(x_1)^{y_1} (1-f_\beta(x_1))^{1-y_1} f_\beta(x_2)^{y_2}(1-f_\beta(x_2))^{1-y_2}\\ &= f_\beta(x_1) (1-f_\beta(x_2)) \end{align}\]
The same logic applies under the i.i.d. assumption for \(K\) observations:
\[\begin{equation} L = \prod_{i=1}^{K} f_\beta(x_i)^{y_i} (1 - f_\beta(x_i))^{1 - y_i} \end{equation}\]
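Maximising \(L\) is equivalent to maximising its logarithm, which turns the product into a sum:
\[\begin{equation} \log L = \sum_{i=1}^{K} \left[ y_i \log f_\beta(x_i) + (1 - y_i) \log (1 - f_\beta(x_i)) \right] \end{equation}\]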
Unlike the simple coin-flipping case, there is no analytic solution for the maximum likelihood estimates. Instead, we minimise the negative log-likelihood \(\ell(\beta) = -\log L\) by gradient descent:
\[\begin{align} \beta_0 &= \beta_0 - \eta \frac{\partial \ell}{\partial \beta_0}\\ \beta_1 &= \beta_1 - \eta \frac{\partial \ell}{\partial \beta_1} \end{align}\]
where \(\eta>0\) is the learning rate.
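A minimal sketch of this procedure; the simplified gradients \(f_\beta(x_i) - y_i\) and \(x_i(f_\beta(x_i) - y_i)\) are the standard derivatives of the (averaged) negative log-likelihood for this model, and the data and settings below are illustrative:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_logistic(x, y, eta=0.1, n_iter=10_000):
    """Gradient descent on the mean negative log-likelihood of y ~ Bernoulli."""
    b0, b1 = 0.0, 0.0
    K = len(x)
    for _ in range(n_iter):
        p = logistic(b0 + b1 * x)          # current predicted probabilities
        grad_b0 = np.sum(p - y) / K        # d(-log L / K) / d(beta_0)
        grad_b1 = np.sum(x * (p - y)) / K  # d(-log L / K) / d(beta_1)
        b0 -= eta * grad_b0
        b1 -= eta * grad_b1
    return b0, b1

# Hypothetical tar measurements with binary cancer outcomes
x = np.array([0.1, 0.4, 0.8, 1.2, 1.6, 2.0])
y = np.array([0, 0, 1, 0, 1, 1])
print(fit_logistic(x, y))
```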
Suppose we estimate that \(\beta_0=-1\) and \(\beta_1=2\). What do these mean?
\[\begin{equation} \theta_i = \frac{1}{1 + \exp (-(-1 + 2 x_i))} \end{equation}\]
so the impact of incremental changes in \(x_i\) on the probability of lung cancer is nonlinear. From the same expression for \(\theta_i\), it follows that:
\[\begin{equation} 1-\theta_i = \frac{\exp (-(-1 + 2 x_i))}{1 + \exp (-(-1 + 2 x_i))} \end{equation}\]
The ratio of the probability of lung cancer to the probability of being cancer-free is called the odds:
\[\begin{align} \frac{\theta_i}{1-\theta_i} &=\exp (-1 + 2 x_i) \end{align}\]
so here \(\exp 2 \approx 7.4\) gives the multiplicative change in the odds for a one-unit change in \(x_i\). Because of this, \(\exp \beta_1\) is known as the odds ratio for that variable.
Taking the log of both sides:
\[\begin{equation} \log \frac{\theta_i}{1-\theta_i} = -1 + 2 x_i \end{equation}\]
so we see that \(\beta_1=2\) effectively gives the change to the log-odds for a one unit change in \(x_i\).
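A quick numerical check of both interpretations, using the estimates above (the evaluation point \(x_i = 0.5\) is arbitrary):

```python
import numpy as np

def theta(x, b0=-1.0, b1=2.0):
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

def odds(x):
    t = theta(x)
    return t / (1 - t)

x = 0.5
print(odds(x + 1) / odds(x))                  # ~7.389 = exp(2): the odds ratio
print(np.log(odds(x + 1)) - np.log(odds(x)))  # 2.0: the log-odds change
```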
It is straightforward to extend the model to incorporate multiple regressors:
\[\begin{equation} f_\beta(x_i) := \frac{1}{1 + \exp (-(\beta_0 + \beta_1 x_{1,i} + \dots + \beta_p x_{p,i}))} \end{equation}\]